    Lattice QCD Thermodynamics on the Grid

    We describe how we used O(10^3) nodes of the EGEE Grid simultaneously, accumulating ca. 300 CPU-years in 2-3 months, to determine an important property of Quantum Chromodynamics. We explain how Grid resources were exploited efficiently and with ease, using a user-level overlay based on the Ganga and DIANE tools on top of the standard Grid software stack. Application-specific scheduling and resource selection, based on simple but powerful heuristics, allowed us to improve the efficiency of the processing and to obtain the desired scientific results by a specified deadline. This work also demonstrates the combined use of supercomputers, to calculate the initial state of the QCD system, and Grids, to perform the subsequent massively distributed simulations. The QCD simulation was performed on a 16^3 × 4 lattice. Keeping the strange quark mass at its physical value, we reduced the masses of the up and down quarks until, under an increase of temperature, the system underwent a second-order phase transition to a quark-gluon plasma. Then we measured the response of this system to an increase in the quark density. We find that the transition is smoothed rather than sharpened. If confirmed on a finer lattice, this finding makes it unlikely that ongoing experimental searches will find a QCD critical point at small chemical potential.
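    The efficiency described above comes from a pull-based master-worker overlay: workers run as ordinary Grid jobs and fetch tasks from a central master, so faster nodes automatically receive more work. The sketch below illustrates this pattern under simple assumptions; the names, the failure-blacklisting heuristic, and the deadline cut-off are illustrative, not the actual Ganga/DIANE interfaces.

    # Minimal sketch of the master-worker "pull" pattern used by user-level
    # overlays such as DIANE. All names here are illustrative placeholders.
    import queue
    import threading
    import time

    MAX_FAILURES = 3                # heuristic: blacklist a worker after repeated failures
    DEADLINE = time.time() + 3600   # stop dispatching new work after the deadline

    def fill_queue(tasks):
        """Put independent simulation tasks into a shared queue."""
        q = queue.Queue()
        for t in tasks:
            q.put(t)
        return q

    def worker(name, q, results, failures):
        """Pull tasks until the queue is empty, the deadline passes, or this
        worker is blacklisted. Faster workers naturally pull more tasks."""
        while time.time() < DEADLINE and failures.get(name, 0) < MAX_FAILURES:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return
            try:
                results.append((name, task, run_simulation(task)))
            except Exception:
                failures[name] = failures.get(name, 0) + 1
                q.put(task)  # re-queue the task for another worker

    def run_simulation(task):
        # placeholder for one Monte Carlo step on the 16^3 x 4 lattice
        time.sleep(0.01)
        return f"result-of-{task}"

    if __name__ == "__main__":
        q = fill_queue(range(100))
        results, failures = [], {}
        threads = [threading.Thread(target=worker, args=(f"w{i}", q, results, failures))
                   for i in range(8)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(len(results), "tasks completed")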

    Cloud storage services for file synchronization and sharing in science, education and research

    Cloud synchronization and sharing services (CS3) for Science, Education and Research aim at providing new ways of accessing, sharing and interacting with existing data repositories, as well as providing new types of storage services. We present the background and context in which CS3 services developed. We also introduce selected scientific contributions from the CS3 community, and from relevant research beyond it, to illustrate some of the technical challenges for on-premise file sync/share services in Education and Research.

    Benchmarking and monitoring framework for interconnected file synchronization and sharing services

    On-premise file synchronization and sharing services are increasingly used in research collaborations and academia. The main motivation for on-premise deployment is connected with requirements on the physical location of the data, data protection policies, and integration with the existing computing and storage infrastructure of research labs. In this work we present a benchmarking and monitoring framework for file synchronization and sharing services. It allows service providers to monitor the operational status of their services and to understand service behavior under different load types and with different network locations of the synchronization clients. The framework is designed as a monitoring and benchmarking tool that provides performance and robustness metrics for interconnected file synchronization and sharing services such as Open Cloud Mesh.
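    As a concrete illustration of the kind of metric such a framework can collect, the sketch below measures end-to-end propagation latency between two interconnected endpoints: upload a marker file on one service and poll until it becomes visible on the other. The URLs, credentials, and WebDAV-style layout are placeholder assumptions, not the framework's actual configuration.

    # Illustrative benchmark probe: upload a file to one sync/share endpoint
    # and measure how long it takes to appear on an interconnected endpoint.
    # Endpoints and credentials below are hypothetical placeholders.
    import time
    import uuid
    import requests

    SOURCE = "https://boxA.example.org/webdav/probe"
    TARGET = "https://boxB.example.org/webdav/probe"
    AUTH = ("probe-user", "probe-password")

    def propagation_latency(timeout=120.0, poll=1.0):
        """Return seconds until a freshly uploaded file appears on TARGET,
        or None if it does not show up within the timeout."""
        name = f"probe-{uuid.uuid4().hex}.txt"
        payload = b"benchmark payload"
        start = time.monotonic()
        requests.put(f"{SOURCE}/{name}", data=payload, auth=AUTH).raise_for_status()
        while time.monotonic() - start < timeout:
            r = requests.get(f"{TARGET}/{name}", auth=AUTH)
            if r.status_code == 200 and r.content == payload:
                return time.monotonic() - start
            time.sleep(poll)
        return None

    if __name__ == "__main__":
        print("propagation latency:", propagation_latency(), "s")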

    Increasing interoperability for research clouds: CS3APIs for connecting sync&share storage, applications and science environments

    Cloud Services for Synchronization and Sharing (CS3) [14] have become increasingly popular in the European Education and Research landscape in recent years. Services such as CERNBox, SWITCHdrive, SURFdrive, PSNCBox, Sciebo, CloudStor and many more have become indispensable in everyday work for scientists, engineers, educators and other users in the public research and education sector. CS3 services are, however, currently too fragmented and lack interoperability. To fix this problem and interconnect storage, application, and research services, a set of interoperable interfaces, the CS3APIs [10], has been developed. The CS3APIs enable the creation of easily accessible and integrated science environments, facilitating cross-institutional research activities and avoiding fragmented silos based on ad-hoc solutions. In this paper we introduce the CS3APIs and their reference implementation, Reva [16].
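    The CS3APIs themselves are language-neutral protobuf/gRPC definitions. The plain-Python sketch below is only a conceptual illustration of the interoperability idea: an application is written once against a common storage contract, and each sync&share backend supplies its own implementation. All class and method names here are hypothetical, not the actual CS3APIs.

    # Conceptual sketch only; the real CS3APIs are protobuf/gRPC interfaces
    # (github.com/cs3org/cs3apis). All names below are hypothetical.
    from abc import ABC, abstractmethod

    class StorageProvider(ABC):
        """Minimal stand-in for a CS3-style storage-provider contract."""

        @abstractmethod
        def stat(self, path: str) -> dict:
            """Return metadata for a resource."""

        @abstractmethod
        def download(self, path: str) -> bytes:
            """Return the resource content."""

    class ExampleSiteProvider(StorageProvider):
        # A hypothetical backend; CERNBox, SWITCHdrive, etc. would each
        # ship their own implementation behind the same contract.
        def stat(self, path: str) -> dict:
            return {"path": path, "size": 17}

        def download(self, path: str) -> bytes:
            return b"hello from a sync&share backend"

    def open_in_viewer(provider: StorageProvider, path: str) -> bytes:
        """An application written once against the contract works with
        every compliant backend, which is the point of the CS3APIs."""
        info = provider.stat(path)
        return provider.download(info["path"])

    if __name__ == "__main__":
        print(open_in_viewer(ExampleSiteProvider(), "/eos/user/demo.txt"))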

    ScienceBox: Converging to Kubernetes containers in production for on-premise and hybrid clouds for CERNBox, SWAN, and EOS

    Docker containers are the de-facto standard to package, distribute, and run applications on cloud-based infrastructures. Commercial providers and private clouds expand their offer with container orchestration engines, making the management of resources and containerized applications tightly integrated. The Storage Group of CERN IT leverages container technologies to provide ScienceBox: an integrated software bundle with storage and computing services for general-purpose and scientific use. ScienceBox features distributed scalable storage, sync&share functionalities, and a web-based data analysis service, and can be deployed on a single machine or scaled out across multiple servers. ScienceBox has proven to be helpful in different contexts, from High Energy Physics analysis to education for high schools, and has been successfully deployed on different cloud infrastructures and heterogeneous hardware.
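    Because ScienceBox targets container orchestration, routine operations reduce to standard Kubernetes calls. The sketch below scales one of the bundled services out across more pods using the official Kubernetes Python client; the deployment and namespace names are placeholders, not ScienceBox's actual manifests.

    # Illustrative sketch: operating a ScienceBox-style deployment with the
    # official Kubernetes Python client. Names below are placeholders.
    from kubernetes import client, config

    def scale(deployment: str, namespace: str, replicas: int) -> None:
        """Scale a containerized service to the requested number of pods."""
        config.load_kube_config()  # or load_incluster_config() inside a pod
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=deployment,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    if __name__ == "__main__":
        scale("cernbox-web", "sciencebox", replicas=3)  # hypothetical names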

    Scheduling for Responsive Grids

    Grids face the challenge of integrating Grid power seamlessly into everyday use. One critical component of this integration is responsiveness: the capacity to support on-demand computing and interactivity. Grid scheduling is involved at two levels in providing responsiveness: the policy level and the implementation level. The main contributions of this paper are as follows. First, we present a detailed analysis of the performance of the EGEE Grid with respect to responsiveness. Second, we examine two user-level schedulers located between the general scheduling layer and the application layer: the DIANE (distributed analysis environment) framework, a general-purpose overlay system, and a specialized, embedded scheduler for gPTM3D, an interactive medical image analysis application. Finally, we define and demonstrate a virtualization scheme which achieves guaranteed turnaround time, enables schedulability analysis, and provides the basis for differentiated services. Both methods target a brokering-based system organized as a federation of batch-scheduled clusters, and an EGEE implementation is described.
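    To make the guaranteed-turnaround idea concrete, the sketch below shows a conservative admission test of the kind a user-level overlay can perform once it controls a known pool of pilot workers: accept a request only if its total work, plus any backlog, fits within the requested deadline. The admission rule and numbers are illustrative assumptions, not the paper's exact model.

    # Sketch of deadline-oriented admission control for a user-level overlay.
    # The rule and parameters are illustrative, not the paper's exact model.
    from dataclasses import dataclass

    @dataclass
    class Request:
        tasks: int            # number of independent tasks
        task_seconds: float   # mean execution time per task
        deadline_s: float     # requested turnaround time

    def admissible(req: Request, workers: int, backlog_s: float = 0.0) -> bool:
        """Conservative schedulability test: total work divided across the
        overlay's workers, plus any queued backlog, must fit the deadline."""
        makespan = backlog_s + (req.tasks * req.task_seconds) / workers
        return makespan <= req.deadline_s

    if __name__ == "__main__":
        req = Request(tasks=500, task_seconds=30.0, deadline_s=1800.0)
        for n in (4, 8, 16):
            print(n, "workers ->", "accept" if admissible(req, n) else "reject")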